In artificial intelligence we often seek to determine an unknown target function $y = f(\mathbf{x})$ of many variables, given a limited set of instances $S = \{(\mathbf{x}^{(i)}, y^{(i)})\}$ with $\mathbf{x}^{(i)} \in D$, where $D$ is a domain of interest. We refer to $S$ as the training set, and the final task is to identify a mathematical model that approximates this target function for new $\mathbf{x}$, i.e., on a test set $T \subset D$ with $T \neq S$ (thus testing model generalisation). For some applications, however, the main interest is to approximate the unknown function well on a larger domain $D'$ that contains $D$. In cases involving the design of new structures, for instance, we may be interested in maximising $f$; the model derived from $S$ should therefore also generalise to $D'$ for values of $y$ larger than the maximum of $y$ observed in $S$. In that sense, the AI system would provide important information that could guide the design process, e.g., by using the learned model as a surrogate function when designing new laboratory experiments. We introduce a method for multivariate regression based on the iterated fitting of a continued fraction that incorporates additive spline models. We compare it with methods such as AdaBoost, kernel ridge, linear regression, Lasso LARS, linear support vector regression, multilayer perceptrons, random forests, stochastic gradient descent, and XGBoost. We evaluate performance on the important problem of predicting the critical temperature of superconductors from physico-chemical properties.
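For orientation, a plausible reading of the model class described above is a truncated continued fraction whose partial numerators and denominators are themselves additive (spline) models of $\mathbf{x}$; the depth $n$ and the exact parameterisation below are assumptions for illustration and may differ from the authors' formulation:

```latex
% Sketch of a depth-n continued-fraction regression model; each g_k and h_k is
% assumed to be an additive spline model of x (assumption, not from the abstract).
f(\mathbf{x}) \;\approx\; g_0(\mathbf{x}) \;+\;
  \cfrac{h_1(\mathbf{x})}{\,g_1(\mathbf{x}) \;+\;
    \cfrac{h_2(\mathbf{x})}{\,g_2(\mathbf{x}) \;+\; \cdots \;+\;
      \cfrac{h_n(\mathbf{x})}{\,g_n(\mathbf{x})}}}
```

One natural reading of "iterated fitting" is that the fraction is deepened one level at a time, with the new terms fitted to the residual structure left by the current truncation.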
It has recently been shown that exploiting sparse connectivity between consecutive layers of a deep neural network yields benefits for large, state-of-the-art models. Network connectivity, however, also plays an important role in the learning curves of shallow networks such as the classic restricted Boltzmann machine (RBM). A fundamental question is how to efficiently find connectivity patterns that improve the learning curve. Recent principled approaches explicitly include network connections as model parameters that must be optimised, but they typically rely on continuous functions to represent connections together with explicit penalties. This work presents a method, based on the idea of network gradients, for finding the optimal connectivity pattern of an RBM: it computes the gradient of every possible connection, given a specific connectivity pattern, and uses the gradient to drive a continuous connection-strength parameter, which in turn determines the connectivity pattern. Learning the RBM parameters and learning the network connectivity are thus performed truly jointly, albeit with different learning rates and without changing the objective function. The method is applied to the MNIST dataset, showing that it finds better RBM models for the benchmark tasks of sample generation and input classification.
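As a rough illustration of the mechanism described above (not the authors' code), the sketch below trains a Bernoulli RBM with CD-1 while maintaining a continuous connection-strength matrix that receives the same gradient at a different learning rate and is thresholded into a binary connectivity mask; the threshold at zero, the learning rates, and the use of CD-1 are assumptions:

```python
# Hedged sketch: Bernoulli RBM trained with CD-1 where a continuous
# "connection strength" matrix S gets the same weight gradient (different
# learning rate) and is thresholded into a binary mask that switches
# connections on or off. Details are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 784, 64
W = 0.01 * rng.standard_normal((n_vis, n_hid))   # weights
S = rng.standard_normal((n_vis, n_hid))          # continuous connection strengths
b_v = np.zeros(n_vis)                            # visible biases
b_h = np.zeros(n_hid)                            # hidden biases
lr_w, lr_s = 0.05, 0.005                         # different learning rates

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 update on a mini-batch v0 of shape (batch, n_vis)."""
    global W, S, b_v, b_h
    M = (S > 0).astype(W.dtype)          # binary mask derived from strengths
    Wm = W * M                           # only active connections are used
    ph0 = sigmoid(v0 @ Wm + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(W.dtype)
    pv1 = sigmoid(h0 @ Wm.T + b_v)
    ph1 = sigmoid(pv1 @ Wm + b_h)
    # Gradient of every possible connection (mask is not applied to the gradient)
    grad = (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    W += lr_w * grad * M                 # weights: update only active connections
    S += lr_s * grad                     # strengths: update all connections
    b_v += lr_w * (v0 - pv1).mean(axis=0)
    b_h += lr_w * (ph0 - ph1).mean(axis=0)

# Usage (with binarised MNIST rows in `batch`):
# for batch in loader: cd1_step(batch)
```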
TMIC is an App Inventor extension for deploying, in educational settings, ML models for image classification developed with Google Teachable Machine. Google Teachable Machine is an intuitive visual tool that provides workflow-oriented support for developing ML models for image classification. Targeting the use of models developed with Google Teachable Machine, the extension TMIC enables the deployment, as part of App Inventor, of trained models exported as TensorFlow.js; App Inventor is one of the most popular block-based programming environments for teaching computing in K-12. The extension was created with the App Inventor extension framework, based on the extension PIC, and is available under the BSD 3 license. It can be used in K-12, in introductory courses in higher education, or by anyone interested in creating intelligent apps with image classification. The extension TMIC is part of the research effort of the initiative Computação na Escola of the Department of Informatics and Statistics at the Federal University of Santa Catarina/Brazil, aiming at introducing AI education in K-12.
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is generally not made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, spanning monolithic and ensemble models, applied to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and that the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model across different scaling techniques tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
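As a minimal sketch of this kind of comparison (not the paper's experimental code), the snippet below evaluates several common scikit-learn scalers with a single classifier under cross-validation; the particular scalers, the k-NN classifier, and the built-in dataset are illustrative assumptions, whereas the study itself covers 5 techniques, 20 algorithms, and 82 datasets:

```python
# Hedged sketch: compare the effect of different scaling techniques on one
# classifier via cross-validation; choices below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import (StandardScaler, MinMaxScaler, MaxAbsScaler,
                                    RobustScaler, QuantileTransformer)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
scalers = {
    "none": None,
    "standard": StandardScaler(),
    "min-max": MinMaxScaler(),
    "max-abs": MaxAbsScaler(),
    "robust": RobustScaler(),
    "quantile": QuantileTransformer(n_quantiles=100, random_state=0),
}
for name, scaler in scalers.items():
    steps = [scaler] if scaler is not None else []
    pipe = make_pipeline(*steps, KNeighborsClassifier())
    scores = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name:>9}: {scores.mean():.3f} +/- {scores.std():.3f}")
```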
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Text classification is a natural language processing (NLP) task relevant to many commercial applications, like e-commerce and customer service. Naturally, classifying such excerpts accurately often represents a challenge, due to intrinsic language aspects, like irony and nuance. To accomplish this task, one must provide a robust numerical representation for documents, a process known as embedding. Embedding represents a key NLP field nowadays, having seen significant advances in the last decade, especially after the introduction of the word-to-vector concept and the popularization of Deep Learning models for solving NLP tasks, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformer-based Language Models (TLMs). Despite the impressive achievements in this field, the literature coverage regarding the generation of embeddings for Brazilian Portuguese texts is scarce, especially when considering commercial user reviews. Therefore, this work aims to provide a comprehensive experimental study of embedding approaches targeting binary sentiment classification of user reviews in Brazilian Portuguese. The study ranges from classical (Bag-of-Words) to state-of-the-art (Transformer-based) NLP models. The methods are evaluated on five open-source databases with pre-defined data partitions, made available in an open digital repository to encourage reproducibility. The fine-tuned TLMs achieved the best results in all cases, followed by the feature-based TLM, LSTM, and CNN, with alternating ranks depending on the database under analysis.
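As a minimal sketch of the classical end of the spectrum described above (not the paper's pipeline), the snippet below builds a Bag-of-Words (TF-IDF) baseline for binary sentiment classification; the placeholder reviews stand in for one of the open-source Brazilian Portuguese corpora used in the study:

```python
# Hedged sketch: TF-IDF Bag-of-Words baseline for binary sentiment classification.
# The in-line reviews are placeholders; swap in a real corpus and its partitions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = ["produto excelente, recomendo", "péssimo, não funciona",
         "chegou rápido e bem embalado", "qualidade horrível"]   # placeholder reviews
labels = [1, 0, 1, 0]                                            # 1 = positive, 0 = negative

bow_baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),   # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(bow_baseline, texts, labels, cv=2).mean())
```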
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to modeling biases within the data instead of focusing on actually useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from engaging in unfair decision-making, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Chronic pain is a multi-dimensional experience, and pain intensity plays an important part, impacting the patient's emotional balance, psychology, and behaviour. Standard self-reporting tools, such as the Visual Analogue Scale for pain, fail to capture this burden. Moreover, this type of tool is susceptible to a degree of subjectivity, depending on the patient's clear understanding of how to use it, on social biases, and on their ability to translate a complex experience to a scale. To overcome these and other self-reporting challenges, pain intensity estimation has previously been studied based on facial expressions, electroencephalograms, brain imaging, and autonomic features. However, to the best of our knowledge, it has never been attempted to base this estimation on patients' narratives of their personal experience of chronic pain, which is what we propose in this work. Indeed, in the clinical assessment and management of chronic pain, verbal communication is essential to convey information to physicians that would otherwise not be easily accessible through standard reporting tools, since language, sociocultural, and psychosocial variables are intertwined. We show that language features from patient narratives indeed convey information relevant for pain intensity estimation, and that our computational models can take advantage of that. Specifically, our results show that patients with mild pain focus more on the use of verbs, whilst moderate and severe pain patients focus on adverbs, and on nouns and adjectives, respectively, and that these differences allow for the distinction between these three pain classes.
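As a rough sketch of the kind of language features discussed above (not the paper's exact pipeline), the snippet below computes part-of-speech proportions (verbs, adverbs, nouns, adjectives) from a narrative and feeds them to a simple three-class classifier; the narratives, labels, feature set, and classifier are all illustrative assumptions:

```python
# Hedged sketch: POS-proportion features from narratives -> 3-class pain model.
# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger").
from collections import Counter
import nltk
from sklearn.linear_model import LogisticRegression

POS_GROUPS = {"VB": "verb", "RB": "adverb", "NN": "noun", "JJ": "adjective"}

def pos_proportions(text):
    """Return the share of verbs, adverbs, nouns and adjectives in `text`."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    counts = Counter(POS_GROUPS.get(t[:2], "other") for t in tags)
    total = max(sum(counts.values()), 1)
    return [counts[g] / total for g in ("verb", "adverb", "noun", "adjective")]

# Hypothetical narratives and pain classes (0 = mild, 1 = moderate, 2 = severe)
narratives = ["I walk and work, it comes and goes during the day.",
              "It constantly throbs, mostly at night, quite intensely.",
              "A sharp, burning, unbearable pain in my lower back."]
y = [0, 1, 2]
X = [pos_proportions(t) for t in narratives]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([pos_proportions("A dull ache that slowly spreads.")]))
```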
Probabilistic models based on continuous latent spaces, such as variational autoencoders, can be understood as uncountable mixture models, where the components depend continuously on the latent code. They are expressive tools for generative and probabilistic modelling, but are at odds with tractable probabilistic inference, i.e., computing marginals and conditionals of the represented probability distribution. Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, which allows them to perform exact inference, but they often show subpar performance compared to continuous latent-space models. In this paper, we investigate a hybrid approach, namely continuous mixtures of tractable models with a small latent dimension. While these models are analytically intractable, they are well suited for numerical integration schemes based on a finite set of integration points. With a sufficient number of integration points the approximation becomes de facto exact. Moreover, for a finite set of integration points, the approximation can be compiled into a PC, performing "exact inference in an approximate model". In experiments, we show that this simple scheme proves remarkably effective, as PCs built this way set a new state of the art for tractable models on many standard density estimation benchmarks.
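A minimal numerical sketch of this idea follows: the continuous mixture $\int p(\mathbf{x}\mid\mathbf{z})\,p(\mathbf{z})\,d\mathbf{z}$ is replaced by a weighted finite sum over integration points, i.e., a finite mixture that a PC can represent. The Monte Carlo integration points and the toy linear "decoder" over Bernoulli leaves are assumptions for illustration, not the paper's choices:

```python
# Hedged sketch: approximate a continuous latent-variable model by a finite
# mixture over integration points z_k with weights w_k (compilable into a PC).
import numpy as np
from scipy.special import logsumexp, expit

rng = np.random.default_rng(0)
d_latent, d_obs, n_points = 2, 16, 256

# Integration points and log-weights: plain Monte Carlo from the prior here,
# so all weights are uniform (log w_k = -log K).
Z = rng.standard_normal((n_points, d_latent))
log_w = np.full(n_points, -np.log(n_points))

# Toy "decoder": logits of independent Bernoulli leaves as a linear map of z.
A = rng.standard_normal((d_latent, d_obs))
leaf_probs = expit(Z @ A)                      # shape (n_points, d_obs)

def log_marginal(x):
    """log p(x) of the compiled finite mixture, for a binary vector x."""
    log_px_given_z = (np.log(leaf_probs) * x +
                      np.log1p(-leaf_probs) * (1 - x)).sum(axis=1)
    return logsumexp(log_w + log_px_given_z)

x = rng.integers(0, 2, size=d_obs)
print(log_marginal(x))
```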
Treatment decisions in cancer care are guided by treatment effect estimates from randomised controlled trials (RCTs). An RCT estimates the average effect of one treatment versus another in a certain population. Treatments, however, may not be equally effective for every patient in a population. Knowing the effectiveness of a treatment tailored to specific patient and tumour characteristics would enable individualised treatment decisions. Obtaining tailored treatment effects by averaging outcomes over different patient subgroups in RCTs requires a very large number of patients so that there is sufficient statistical power in all relevant subgroups, for all possible treatments. The American Joint Committee on Cancer (AJCC) recommends that researchers develop outcome prediction models (OPMs) to enable individualised treatment decisions. OPMs, sometimes called risk models or prognostic models, use patient and tumour characteristics to predict a patient's outcome, such as overall survival. The assumption is that these predictions are useful for treatment decisions, through rules such as "prescribe chemotherapy only if the OPM predicts that the patient has a high risk of recurrence". Recognising the importance of reliable predictions, the AJCC published a checklist for OPMs to ensure dependable prediction accuracy in the patient population for which the OPM was designed. However, accurate outcome prediction does not imply that these predictions yield good treatment decisions. In this perspective, we show that OPMs rely on a fixed treatment policy, which implies that an OPM found to accurately predict outcomes in a validation study can still lead to patient harm when used to inform treatment decisions. We then provide guidance on how to develop models that are useful for individualised treatment decisions and on how to evaluate whether a model has value for decision making.
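To make the core argument concrete, here is a small, entirely hypothetical simulation (all numbers invented, not taken from the paper) of how an OPM that is well calibrated under the historical, fixed treatment policy can increase harm once its predictions are used to withhold treatment:

```python
# Hedged toy illustration: an OPM accurate under the historical policy causes
# harm when used to change who gets treated. All probabilities are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
high_severity = rng.random(n) < 0.5            # patient/tumour characteristic

# Historical (fixed) policy: treat every high-severity patient.
treated_hist = high_severity.copy()

def event_prob(high, treated):
    """Assumed true risk of a bad outcome: treatment helps high-severity patients."""
    return np.where(high, np.where(treated, 0.20, 0.60), 0.30)

events_hist = rng.random(n) < event_prob(high_severity, treated_hist)

# "OPM": predicted risk = observed event rate per severity group under the
# historical policy. It is well calibrated for the data it was built on.
pred = np.where(high_severity,
                events_hist[high_severity].mean(),
                events_hist[~high_severity].mean())

# New decision rule driven by the OPM: treat only if predicted risk >= 0.25.
# High-severity patients look *low* risk (because they were treated) and now
# go untreated.
treated_new = pred >= 0.25
events_new = rng.random(n) < event_prob(high_severity, treated_new)

print("event rate, historical policy:", events_hist.mean())   # ~0.25
print("event rate, OPM-guided policy:", events_new.mean())    # ~0.45
```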